Reducing Adversarial Vulnerability through Adaptive Training Batch Size

Authors

Abstract

Neural networks possess an ability to generalize well on a data distribution, to the extent that they are capable of fitting even randomly labeled data. However, they are also known to be extremely sensitive to adversarial examples. Batch Normalization (BatchNorm), a very common component of deep learning architectures, has been found to increase this vulnerability. Fixup Initialization (Fixup Init) has been shown to be an alternative to BatchNorm that can considerably strengthen networks against adversarial examples. This robustness can be improved further by employing a smaller batch size in training. The latter, however, comes with a tradeoff in the form of significantly longer training time (up to ten times longer when reducing the batch size from the default 128 to 8 for ResNet-56). In this paper, we propose a workaround to this problem by starting training with a small batch size and gradually increasing it during training. We empirically show that our proposal still improves robustness (by 5.73%) over ResNet-56 with Fixup Init and batch size 128. At the same time, it keeps training time shorter (only 4 times longer, instead of 10 times).
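The adaptive schedule described above can be sketched as a simple function that starts at a small batch size and grows it toward the default. This is a minimal illustration, not the paper's actual schedule: the function name, the doubling rule, and the `grow_every` interval are assumptions chosen only to show the idea of moving from batch size 8 to 128 during training.

```python
# Hypothetical sketch of an adaptive batch-size schedule: start small
# (batch size 8) and grow toward the default (128 for ResNet-56).
# The doubling rule and interval are illustrative assumptions.

def batch_size_schedule(epoch, start=8, final=128, grow_every=10):
    """Double the batch size every `grow_every` epochs, capped at `final`."""
    doublings = epoch // grow_every
    return min(start * (2 ** doublings), final)

# With the defaults, the schedule visits 8, 16, 32, 64, 128 over training.
sizes = [batch_size_schedule(e) for e in range(0, 50, 10)]
print(sizes)  # [8, 16, 32, 64, 128]
```

In a training loop, the returned value would be used to rebuild the data loader (or regroup mini-batches) at the start of each epoch, so early epochs get the robustness benefit of small batches while later epochs regain the throughput of large ones.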


Similar References

Multiple Batch Sizing through Batch Size Smoothing

Batch sizing across different planning periods is a classical problem in production planning, and many exact and heuristic methods have been proposed to solve it, each considering various aspects of the original problem. The solution obtained from the majority of them – e.g. MRP – takes a form in which there may be some periods of idleness, or each period should produce a...


Dramatically Reducing Training Data Size Through Vocabulary Saturation

Our field has seen significant improvements in the quality of machine translation systems over the past several years. The single biggest factor in this improvement has been the accumulation of ever larger stores of data. However, we now find ourselves the victims of our own success, in that it has become increasingly difficult to train on such large sets of data, due to limitations in memory, ...


Adaptive Batch Size for Safe Policy Gradients

PROBLEM
• Monotonically improve a parametric Gaussian policy πθ in a continuous MDP, avoiding unsafe oscillations in the expected performance J(θ).
• Episodic policy gradient: estimate ∇̂θJ(θ) from a batch of N sample trajectories, then update θ′ ← θ + Λ∇̂θJ(θ).
• Tune the step size α and batch size N to limit oscillations. Not trivial:
– Λ: trade-off with speed of convergence ← adaptive methods.
– N: trade-off...


Learning Privacy Preserving Encodings through Adversarial Training

We present a framework to learn privacy-preserving encodings of images (or other high-dimensional data) to inhibit inference of a chosen private attribute. Rather than encoding a fixed dataset or inhibiting a fixed estimator, we aim to learn an encoding function such that even after this function is fixed, an estimator with knowledge of the encoding is unable to learn to accurately predict the...



Journal

Journal title: Jurnal Ilmu Komputer dan Informasi

Year: 2021

ISSN: 2502-9274, 2088-7051

DOI: https://doi.org/10.21609/jiki.v14i1.907